Mandelbrot set drawing

Most algorithms and programs that draw the Mandelbrot set use the simple iteration z² + c → z, starting with z = 0, until the absolute value of z exceeds 2. That is a very fast and simple method, but it has one big disadvantage: you can never be sure whether a given point really is part of the Mandelbrot set, or just hasn't escaped yet.
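For contrast, here is a minimal sketch of that common escape-time loop (the iteration limit and all names are illustrative, not taken from any particular program):

    // Classic escape-time test: iterate z <- z^2 + c from z = 0 and report
    // how long the orbit takes to leave the disc of radius 2.
    public class EscapeTime {
        /** Returns the iteration at which |z| exceeded 2, or -1 if the point
         *  (cr, ci) has not escaped within maxIter iterations. */
        static int escapeTime(double cr, double ci, int maxIter) {
            double zr = 0, zi = 0;
            for (int n = 0; n < maxIter; n++) {
                if (zr * zr + zi * zi > 4) return n; // |z| > 2: definitely outside
                double t = zr * zr - zi * zi + cr;   // real part of z^2 + c
                zi = 2 * zr * zi + ci;               // imaginary part of z^2 + c
                zr = t;
            }
            return -1; // inside, or maybe just hasn't escaped yet
        }
    }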

My goal is to create a procedure that, given a point in the complex plane, runs until it decides whether the point is or is not a part of the Mandelbrot set. This will perhaps be slower than the common algorithm, but, due to automatic adaptation of the iteration count, will show every detail of the set.

Outline of the algorithm

One obvious way we can be sure that a point will never escape is if the iterations of z² + c → z enter a cycle; then they will repeat this cycle forever. However, an exact cycle is only entered by a few points inside the set. Other points just get closer and closer to a cycle. Since we use finite precision (the memory of the computer is finite, after all), they will eventually converge to an exact cycle. That, however, may take many iterations.

So, how can we know if a series of iterations (named "main iterations" hereafter) converges to a cycle?

The search for the attractor cycle

First, it's relatively easy to find the cycle if we know its length. Time for some math:
z₀ = Pc∘p(z₀)
That says that if we apply the map
Pc(z) = z² + c
to a periodic point z₀ p times, we arrive back at z₀. But Pc∘p(z) is just a polynomial:
Pc∘0(z) = z
Pc∘1(z) = z² + c
Pc∘2(z) = (z² + c)² + c
…

So, we can use Newton or Laguerre or any other iterative method to find the roots of Pc∘p(z) − z = 0. Most often it will only take about 5-6 iterations of the Newton algorithm or 3 iterations of Laguerre, unless we are at a multiple root - these will be discussed later. Of course, the equation has many roots, in fact 2ᵖ of them, and we need the solver to converge to the solution closest to the current main iteration value. This can be helped, if not ensured, by seeding the solver with the current main iteration value. If we are still far from the cycle, the solver may converge to a different root than the main iterations; it will be up to later parts of the algorithm to deal with that.
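As an illustration, here is a sketch of one Newton step for f(z) = Pc∘p(z) − z, seeded with the current main-iteration value; the iterate and its derivative are evaluated together by the chain rule (the class and method names are mine, not from the original program):

    public class CycleFinder {
        /** One Newton step for f(z) = Pc∘p(z) - z, seeded at (zr, zi).
         *  Returns the refined root estimate as {re, im}. */
        static double[] newtonStep(double cr, double ci, double zr, double zi, int p) {
            double fr = zr, fi = zi; // running value of the k-th iterate
            double dr = 1, di = 0;   // its derivative d/dz, by the chain rule
            for (int k = 0; k < p; k++) {
                double t = 2 * (fr * dr - fi * di); // d <- 2 f d (uses the old f)
                di = 2 * (fr * di + fi * dr);
                dr = t;
                t = fr * fr - fi * fi + cr;         // f <- f^2 + c
                fi = 2 * fr * fi + ci;
                fr = t;
            }
            fr -= zr; fi -= zi;  // f(z)  = Pc∘p(z) - z
            dr -= 1;             // f'(z) = d/dz Pc∘p(z) - 1
            double den = dr * dr + di * di;
            return new double[] {                   // z <- z - f / f'
                zr - (fr * dr + fi * di) / den,
                zi - (fi * dr - fr * di) / den
            };
        }
    }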

The derivative check

Now, let's have a look at the equation z₀ = Pc∘p(z₀) from a different viewpoint. We'll simplify the following discussion from complex to real numbers, but the arguments hold for complex numbers as well.

Let's plot successive iterations of Pc∘p(z) → z on a graph, starting from some value of z. The red vertical lines show the evaluation of Pc∘p(z) (or any other function); the black horizontal lines show the plugging of the output back into the input.
You can see that cases b and c converge to a solution, while a and d do not. Why is that? It's because the graph of the function (the blue curve) is neither decreasing nor increasing too fast (its slope stays between -45° and +45°).

Mathematically speaking, the absolute value of the derivative of Pc∘p(z) with respect to z is below 1. This last result also holds for the case of complex numbers: the map is then a contraction, so each application brings the iterate strictly closer to the fixed point. So, if we can check that the absolute value of the derivative of Pc∘p(z) is below 1 everywhere between the current main iteration and the final cycle value z₀, we know that the main iterations will eventually converge to that cycle. That's the heart of the algorithm.

Performing the derivative check

How can we make sure that, on a given interval, a function never exceeds a given bound? One simple way would be to evaluate the function many times and check whether the bound is exceeded. That is obviously the wrong way, since we don't know how many evaluations would be enough. Besides, it might be quite slow, since we are actually working with complex numbers: while sampling an interval of real numbers takes, let's say, 10 evaluations, sampling the corresponding region of complex numbers with the same density takes 79 evaluations (we're sampling a disc instead of a line).

There is another way, called interval arithmetic. It does not give precise results but guaranteed bounds, so if the result of the interval arithmetic fits within the required limit, we know that the exact result does as well. So, at the cost of precision, we gain reliability.

There are many ways to perform interval arithmetic on complex numbers. One simple but efficient, and quite precise, way is to store a complex number not just as a+b⋅i (i being the imaginary unit), but as a+b⋅i+e⋅D, D being the unit disc and e a real number. This means that a number is not a point but a filled disc of radius e centered at a+b⋅i. The rules for addition, subtraction and multiplication of such numbers are simple: the real and imaginary parts of the result are the same as if we were working with points. The resulting radius for addition and subtraction is the sum of the radii of the operands; the radius of the result of multiplication is
e = e₁|Z₂| + e₂|Z₁| + e₁e₂
where Z₁ and Z₂ are the operands, Zₖ = aₖ + bₖ⋅i + eₖ⋅D, and |Z| is the absolute value of the center of Z, |Z| = √(a² + b²).
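A minimal sketch of this disc model, with just the three operations described above (the class name is mine):

    // A complex "number" that is really a filled disc: center (re, im), radius e.
    public class Disc {
        final double re, im, e;
        Disc(double re, double im, double e) { this.re = re; this.im = im; this.e = e; }

        double centerAbs() { return Math.hypot(re, im); } // |Z|, center only

        Disc add(Disc o) { return new Disc(re + o.re, im + o.im, e + o.e); }
        Disc sub(Disc o) { return new Disc(re - o.re, im - o.im, e + o.e); }

        Disc mul(Disc o) {
            return new Disc(re * o.re - im * o.im,
                            re * o.im + im * o.re,
                            e * o.centerAbs() + o.e * centerAbs() + e * o.e);
        }
    }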

So, if we take a disc centered at the final cycle value z₀ and covering the current main iteration value, and evaluate the derivative of Pc∘p(z) on that disc using interval arithmetic, we have upper bounds for the derivative anywhere on the disc. If the absolute value of these bounds does not exceed 1, we know the absolute value of the derivative does not exceed 1 either. That means the iterations of Pc∘p(z) → z will converge to some fixed point, and we know it's z₀.
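Sketched with the Disc class from the previous example (the method name is mine), the check might look like this; the bound on the derivative anywhere on a disc is the absolute value of its center plus its radius:

    // True if disc arithmetic proves |d/dz Pc∘p| < 1 on the whole disc z0,
    // which must be centered at the cycle value and cover the current
    // main-iteration value.
    static boolean cycleAttracts(Disc z0, double cr, double ci, int p) {
        Disc z = z0;
        Disc deriv = new Disc(1, 0, 0);
        Disc c = new Disc(cr, ci, 0);
        Disc two = new Disc(2, 0, 0);
        for (int k = 0; k < p; k++) {
            deriv = two.mul(z).mul(deriv); // d <- 2 z d (chain rule)
            z = z.mul(z).add(c);           // z <- z^2 + c
        }
        return deriv.centerAbs() + deriv.e < 1;
    }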

Because interval arithmetic is very conservative, it will often report that the derivative does not fit when in reality it does. There are two ways to deal with that: first, we can try to use a more precise model for a complex interval than a disc, and second, we can just live with it. If the derivative does not fit, we just keep iterating the main loop and check the convergence again later. Since the main iterations are converging towards the attractor cycle, the checked radius gets smaller and smaller, until the imprecise bounds eventually fit.

I've tried to improve the interval arithmetic precision with little success, so this is still the weakest part of the algorithm as a whole.

Search for the period

The last thing we need to describe, and the first we need to find, is the period candidate. From what we've seen, it's clear that if we make a mistake guessing the period, the derivative check will fail. Thus it is not critical that we find the period exactly every time; we just need to be sure we'll make the right guess from time to time.

The simplest way is Floyd's algorithm, modified to work with distances instead of exact matches (a sketch follows the list):

  1. Perform two iterations of the main loop: z ← (z² + c)² + c
  2. Perform one iteration of the main loop: h ← h² + c
  3. Check if the distance between z and h is smaller than ever before. If it is, we have found a cycle candidate. The candidate period is the difference between the "ages" of z and h, which equals the "age" of h, i.e. the number of iterations of this algorithm performed so far. The candidate value is the current value of z (it's probably closer to the attractor cycle than the current value of h).
  4. Otherwise the values of z and h are far apart, and we keep looping until they come close.
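A sketch of this loop, printing candidates instead of feeding them to the rest of the algorithm; note that the "age" of z is 2n and the age of h is n, so their difference is n:

    static void findCandidates(double cr, double ci, int maxIter) {
        double zr = 0, zi = 0, hr = 0, hi = 0;
        double best = Double.MAX_VALUE;
        for (int n = 1; n <= maxIter; n++) {
            for (int k = 0; k < 2; k++) {       // z: two main iterations
                double t = zr * zr - zi * zi + cr;
                zi = 2 * zr * zi + ci; zr = t;
            }
            double t = hr * hr - hi * hi + cr;  // h: one main iteration
            hi = 2 * hr * hi + ci; hr = t;
            double d = Math.hypot(zr - hr, zi - hi);
            if (d < best) {                     // closer than ever before
                best = d;
                System.out.println("candidate period " + n
                    + ", candidate value " + zr + " + " + zi + "i");
            }
        }
    }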


This algorithm works, but it gives many false guesses and takes a long time to make the right one. The following picture shows how successive iterations converge to a cycle of length 1 (a single point). That means we should have a cycle candidate at every main iteration. Instead, of the first 13 iterations, only 3 give candidates, and afterwards only every 13th iteration does.
This happens near the points where a higher-period bulb is attached to a lower-period one (in this case, a period-13 bulb attached to the period-1 cardioid). I'll describe a better algorithm in a dedicated section.

Period polishing

The algorithm for the derivative check will confirm that a point will eventually converge to a cycle of a given length p. However, cycles, you know, repeat. So the check will also succeed for the same point with period 2p, or 3p, or any other integer multiple of p, since these also form a cycle. So, after a cycle is confirmed, if we want to know the exact period, we need to make sure we've found the right p, not a multiple of it.


How do we check it? In our example, if the period check fails a few times at the beginning, we'll only get candidate periods that are multiples of 13. So, what we should do, after a successful period check, is to check every period that divides the period we've found. We could even simplify that to just performing p additional main iterations (starting with z₀) and seeing whether we arrive back at z₀ in fewer than p iterations.

That would work, in theory. But we're working with a computer representation, not with exact numbers, so we'd need to pick some distance limit (given by the working precision) in advance. There's another way that I like better. Remember how the period check works, by looking at the absolute value of the derivative of Pc∘p(z₀)? Suppose we evaluate this derivative at all points along the cycle. If there's no value below 1, we know there cannot be a shorter cycle - the check would fail.

However, it does sometimes happen that a derivative is below 1 even though it does not correspond to a real cycle. Through experiments, I've found it is possible to rule out this case reliably by checking the whole period:

We already know it works for p' = p, since then the only thing to check is that |d/dz Pc∘p(z₀)| ≤ |d/dz Pc∘0(z₀)| = 1.
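One way the divisor check could be sketched: walk once around the confirmed cycle, carrying the running derivative d/dz Pc∘k(z₀), and stop at the first proper divisor k of p where its absolute value is at most 1. The method name and the exact test are mine; as noted above, such a divisor is only a candidate and still has to pass the full interval check.

    static int shortestViablePeriod(double z0r, double z0i, double cr, double ci, int p) {
        double zr = z0r, zi = z0i, dr = 1, di = 0;
        for (int k = 1; k <= p; k++) {
            double t = 2 * (zr * dr - zi * di);   // d <- 2 z d
            di = 2 * (zr * di + zi * dr); dr = t;
            t = zr * zr - zi * zi + cr;           // z <- z^2 + c
            zi = 2 * zr * zi + ci; zr = t;
            if (k < p && p % k == 0 && Math.hypot(dr, di) <= 1)
                return k; // smallest divisor that could be the true period
        }
        return p; // no shorter cycle is possible
    }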


Final refinement

Once we know the main iterations will converge to a cycle, we know that the point is part of the Mandelbrot set; we also know the cycle exactly (up to machine precision) thanks to the Newton or Laguerre iteration, without needing to evaluate the main iteration many, many times. That gives us enough information to compute an interior distance estimate (see Wikipedia).

Of course, if, at any point, the main iterations leave the disc of radius 2, we know the point is outside the Mandelbrot set. Then it's possible to find an exterior distance estimate (see Wikipedia again).

The exterior distance estimate is given by
b = 2 ln(|F|) ⋅ |F| / |Fc|
where F is the value of Pc∘n(c) and Fc is its derivative with respect to c. The error in this estimate is proportional to 1/|F|, so n should be high enough for |F| to be reasonably large.

We can see that, as n increases, if |Fc| grows to very high values while |F| stays small, the exterior distance estimate will be very close to 0. This is what happens, most importantly, on the real axis between -2 and -1.401156 (as well as at some other points): the iterations are bound forever to the real interval -2..2, but they never enter a cycle except on a few small subintervals. Our algorithm, as described up to this point, would iterate forever there, trying to find a cycle or escape the disc of radius 2 around the complex zero. However, I've found through experiments that the derivative of Pc∘n(c) with respect to c grows at every iteration. So, I've added the evaluation of this derivative to the main loop, along with a check. If the magnitude of the derivative gets near the largest representable floating-point number (I've used 10³⁰), it's clear from the equation for the exterior distance estimate that the point we're testing is closer than 10⁻³⁰ to the Mandelbrot set. These points form a special case I call "boundary". Whether they really are boundary points is an open question to me.
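A sketch of the escape loop extended with the derivative Fc and both checks described above; the escape radius and the 10³⁰ limit stand for the kind of constants meant, not exact values from the original program:

    /** Returns the exterior distance estimate, 0 for the "boundary" case,
     *  or -1 if the point did not escape within maxIter iterations. */
    static double exteriorDistance(double cr, double ci, int maxIter) {
        double fr = cr, fi = ci; // F  = Pc∘n(c)
        double dr = 1, di = 0;   // Fc = dF/dc
        for (int n = 0; n < maxIter; n++) {
            double fa = Math.hypot(fr, fi);
            if (fa > 1e6)        // escaped, and |F| is large enough
                return 2 * Math.log(fa) * fa / Math.hypot(dr, di);
            if (Math.hypot(dr, di) > 1e30)
                return 0;        // |Fc| huge while |F| small: "boundary"
            double t = 2 * (fr * dr - fi * di) + 1; // Fc <- 2 F Fc + 1
            di = 2 * (fr * di + fi * dr); dr = t;
            t = fr * fr - fi * fi + cr;             // F <- F^2 + c
            fi = 2 * fr * fi + ci; fr = t;
        }
        return -1;
    }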


The Newton and Laguerre iteration

The Newton-Raphson iterative method for solving an equation f(z)=0 works by evaluating the function F←f(r) and its derivative Fz←f'(r) at a given point r, then plugging these into the following equation:
rnew ← r − F/Fz
This gives a new point rnew, which is used again in the same equation until the function value F is close enough to zero, or until rnew is very close to r. Here, "close enough" means "with respect to the machine precision".

The method has two drawbacks. First, if f(z) = 0 has only complex roots but we start with a real r, a complex root will never be found. This is overcome by a simple countermeasure: if the value of F has increased since the last iteration, r is shifted a bit into the complex plane.

The other problem is that it converges very slowly near roots with multiplicity higher than 1. This is commonly overcome by solving f(z)/f'(z) = 0 instead of f(z) = 0. In this case, the Newton iteration becomes
rnew ← r − (F/Fz)/(1 − F⋅Fzz/Fz²)
where F and Fz are as before, and Fzz means the second derivative of f(z) with respect to z, evaluated at r. My implementation switches between the two schemes automatically, choosing the second if both F and Fz are close to 0.
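A sketch of such a step, with complex numbers passed as double[]{re, im} and the small-value threshold left as a parameter; the switching rule is the one just described, and the helper functions are mine:

    public class NewtonStep {
        static double[] mul(double[] a, double[] b) {
            return new double[]{a[0]*b[0] - a[1]*b[1], a[0]*b[1] + a[1]*b[0]};
        }
        static double[] div(double[] a, double[] b) {
            double d = b[0]*b[0] + b[1]*b[1];
            return new double[]{(a[0]*b[0] + a[1]*b[1]) / d,
                                (a[1]*b[0] - a[0]*b[1]) / d};
        }
        static double abs(double[] a) { return Math.hypot(a[0], a[1]); }

        /** One Newton step at r, given F = f(r), Fz = f'(r), Fzz = f''(r). */
        static double[] step(double[] r, double[] f, double[] fz, double[] fzz,
                             double eps) {
            double[] s = div(f, fz);                  // F / Fz
            if (abs(f) < eps && abs(fz) < eps) {
                // near a multiple root: divide by 1 - F*Fzz/Fz^2
                double[] q = div(mul(f, fzz), mul(fz, fz));
                s = div(s, new double[]{1 - q[0], -q[1]});
            }
            return new double[]{r[0] - s[0], r[1] - s[1]};
        }
    }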

Laguerre iteration is generally considered superior to Newton iteration, because it converges faster and more reliably. Like Newton, it requires the evaluation of F, Fz, and Fzz, but it also needs the total expected number of roots, n; for polynomials, that's the degree of the polynomial. It first finds two numbers G and H,
G ← Fz/F
H ← G² − Fzz/F
Then,
rnew ← r − n/(G ± √((n−1)(nH − G²)))
where the sign of the square root is taken to maximize the absolute value of the denominator.

It has a similar problem with multiple roots, which I overcome with a home-brew method: first, G and H are computed as before. Then the multiplicity m is estimated as
m ← Round(Re(G²/H))
If m is lower than 1, it is set to 1; if it's higher than n, it's set to n. Then a modified Laguerre iteration follows:
rnew ← r − n/(G ± √((n−m)(nH − G²)/m))

This modified Laguerre method converges very fast in all cases. However, it has an unexpected drawback: it requires the degree of the polynomial, n. In our case, for p iterations of z² + c → z, the polynomial has degree n = 2ᵖ. For large p, this can exceed the largest floating-point number that the computer can represent.
To use Laguerre for large periods, the equation needs to be rearranged; see the sources.
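For moderate periods, a self-contained sketch of the modified step looks like this; it implements the equations above directly, without the rearrangement for large n, and the complex helpers and principal square root are mine (n is passed as a double because it can be as large as 2ᵖ):

    public class LaguerreStep {
        static double[] mul(double[] a, double[] b) {
            return new double[]{a[0]*b[0] - a[1]*b[1], a[0]*b[1] + a[1]*b[0]};
        }
        static double[] div(double[] a, double[] b) {
            double d = b[0]*b[0] + b[1]*b[1];
            return new double[]{(a[0]*b[0] + a[1]*b[1]) / d,
                                (a[1]*b[0] - a[0]*b[1]) / d};
        }
        static double[] csqrt(double[] a) {  // principal complex square root
            double r = Math.hypot(a[0], a[1]);
            return new double[]{Math.sqrt((r + a[0]) / 2),
                                Math.copySign(Math.sqrt((r - a[0]) / 2), a[1])};
        }

        static double[] step(double[] r, double[] f, double[] fz, double[] fzz,
                             double n) {
            double[] g = div(fz, f);                   // G = Fz / F
            double[] g2 = mul(g, g);
            double[] q = div(fzz, f);
            double[] h = {g2[0] - q[0], g2[1] - q[1]}; // H = G^2 - Fzz / F
            double m = Math.round(div(g2, h)[0]);      // m = Round(Re(G^2/H))
            m = Math.max(1, Math.min(n, m));
            double[] t = {(n * h[0] - g2[0]) * (n - m) / m,
                          (n * h[1] - g2[1]) * (n - m) / m};
            double[] s = csqrt(t);                     // sqrt((n-m)(nH-G^2)/m)
            double[] dp = {g[0] + s[0], g[1] + s[1]};
            double[] dm = {g[0] - s[0], g[1] - s[1]};
            double[] den =                             // maximize |denominator|
                Math.hypot(dp[0], dp[1]) >= Math.hypot(dm[0], dm[1]) ? dp : dm;
            double[] delta = div(new double[]{n, 0}, den);
            return new double[]{r[0] - delta[0], r[1] - delta[1]};
        }
    }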

The period check

Besides the derivative check, we also check the numbers for an exact match, as described (and refuted) in the outline. Other than that, it's pretty obvious.

The search for the candidate period

The simple algorithm described in the outline is not very good, as has already been explained. Another possibility is to use the derivative d/dz Pc∘n(z) again: we know that whenever its absolute value falls below 1, we have a period candidate. The problem is that if the period check fails a few times due to imprecise interval arithmetic, we'll eventually be checking very high multiples of the period, and the accumulation of rounding errors may then lead to errors that could have been avoided. However, it's not sufficient to count the main iterations between the n's where the derivative falls below 1. For example, I've seen a case where the period was 33, but it only took 4 main iterations for the derivative to fall below 1, then the next iteration it was below 1 again, then it wasn't that low for another 28 iterations, and then the cycle repeated. So the candidate periods were 4, 1, 28, 4, 1, 28, ... but the algorithm never found 33 = 4 + 1 + 28.


My current solution is to remember one candidate point. As the main iteration finds a candidate, I look at the value of the derivative. If it's not lower than ever before, I just add the candidate period to an accumulator. Otherwise I check the accumulated period, reset the accumulator, and remember the new lowest derivative.

This has the disadvantage that the right period could be, in the case above, either 4+1+28 or 4+1+28+4, so we need to check both the accumulated period and the accumulated period plus the current candidate. See the source code if you're in doubt about what I mean.
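Under one reading of that description, the bookkeeping looks roughly like this (the class and its fields are hypothetical, not lifted from the source code; a returned period of 0 would simply be skipped by the caller):

    // Called every time the main loop finds a candidate (derivative below 1).
    // Returns the periods worth running the full check on, or null.
    class PeriodAccumulator {
        double lowestDeriv = Double.MAX_VALUE;
        int accumulated = 0;

        int[] onCandidate(int candidatePeriod, double derivAbs) {
            if (derivAbs >= lowestDeriv) {
                accumulated += candidatePeriod; // not lower than ever: accumulate
                return null;
            }
            // new lowest derivative: check the accumulated period, and the
            // accumulated period plus this candidate, then start over
            int[] toCheck = {accumulated, accumulated + candidatePeriod};
            lowestDeriv = derivAbs;
            accumulated = 0;
            return toCheck;
        }
    }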

This part of the algorithm clearly needs some more research.

The period polishing

Just as described in the outline.

Internal distance estimation

It's simply the evaluation of the equation from Wikipedia. One thing to note is that sometimes the result is negative.
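For reference, here is a sketch of that evaluation under my reading of the Wikipedia formula: the four derivatives of Pc∘p are carried along the confirmed cycle and then combined; a negative result signals the case discussed next.

    /** Interior distance estimate for a confirmed cycle of period p starting
     *  at the cycle value (z0r, z0i). */
    static double interiorDistance(double z0r, double z0i, double cr, double ci, int p) {
        double zr = z0r, zi = z0i;
        double dzr = 1, dzi = 0;   // d/dz
        double dcr = 0, dci = 0;   // d/dc
        double dzzr = 0, dzzi = 0; // d2/dz2
        double dczr = 0, dczi = 0; // d2/(dc dz)
        for (int k = 0; k < p; k++) {
            double t;
            // second derivatives first (they use the old first derivatives)
            t    = 2 * ((dzr * dzr - dzi * dzi) + (zr * dzzr - zi * dzzi));
            dzzi = 2 * (2 * dzr * dzi + zr * dzzi + zi * dzzr);
            dzzr = t;
            t    = 2 * ((dcr * dzr - dci * dzi) + (zr * dczr - zi * dczi));
            dczi = 2 * ((dcr * dzi + dci * dzr) + (zr * dczi + zi * dczr));
            dczr = t;
            // first derivatives (they use the old z)
            t   = 2 * (zr * dzr - zi * dzi);
            dzi = 2 * (zr * dzi + zi * dzr); dzr = t;
            t   = 2 * (zr * dcr - zi * dci) + 1;
            dci = 2 * (zr * dci + zi * dcr); dcr = t;
            // the orbit itself
            t  = zr * zr - zi * zi + cr;
            zi = 2 * zr * zi + ci; zr = t;
        }
        // estimate = (1 - |dz|^2) / |dcz + dzz * dc / (1 - dz)|
        double ur = 1 - dzr, ui = -dzi, den = ur * ur + ui * ui;
        double qr = (dcr * ur + dci * ui) / den;  // dc / (1 - dz), real part
        double qi = (dci * ur - dcr * ui) / den;  // imaginary part
        double nr = dczr + dzzr * qr - dzzi * qi;
        double ni = dczi + dzzr * qi + dzzi * qr;
        return (1 - (dzr * dzr + dzi * dzi)) / Math.hypot(nr, ni);
    }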

A negative result happens if the absolute value of the derivative ∂/∂z Pc∘p(z₀) is above 1. This cannot happen if we use the derivative check as described above. However, if we start exactly at such a point, we have an exact match, and the period check succeeds immediately, before the derivative is ever checked.

We then know the derivative at that point is above 1, so it cannot be an interior point; but we do have a period. What that means is that we've actually found a Misiurewicz point.

The program

The application using the algorithms described here is no longer available due to space limitations on this server.

A highly improved version can be downloaded here. It uses experimental techniques that seem to work but don't have proofs.

This version also adds a computation of external angle, external and internal rays, and more coloring options. To some degree, it can also show orbit diagrams and Julia sets.

Conversion to Java

I've partially converted the application to Java. The applet is embedded below. The main features are:

Things I'm planning to improve:

Things that are not gonna change:

Sources can be downloaded here.

Older Delphi version (contains bugs and is bad overall): sources for Delphi (includes a Windows .exe)
